Deep Neural Networks
01. Introduction
02. Classification Problems 1
03. Classification Problems 2
04. Linear Boundaries
05. Higher Dimensions
06. Perceptrons
07. Why "Neural Networks"?
08. Perceptrons as Logical Operators
09. Perceptron Trick
10. Perceptron Algorithm
11. Non-Linear Regions
12. Error Functions
13. Log-loss Error Function
14. Discrete vs Continuous
15. Softmax
16. One-Hot Encoding
17. Maximum Likelihood
18. Maximizing Probabilities
19. Cross-Entropy 1
20. Cross-Entropy 2
21. Multi-Class Cross-Entropy
22. Logistic Regression
23. Gradient Descent
24. Perceptron vs Gradient Descent
25. Continuous Perceptrons
26. Non-Linear Data
27. Non-Linear Models
28. Neural Network Architecture
29. Feedforward
30. Backpropagation
31. Keras
32. Mini Project: Student Admissions in Keras
33. Lesson Plan: Week 2
34. Training Optimization
35. Batch vs Stochastic Gradient Descent
36. Learning Rate Decay
37. Testing
38. Overfitting and Underfitting
39. Early Stopping
40. Regularization
41. Regularization 2
42. Dropout
43. Vanishing Gradient
44. Other Activation Functions
45. Local Minima
46. Random Restart
47. Momentum
48. Optimizers in Keras
49. Error Functions Around the World
50. Mini Project Intro
51. Mini Project: IMDB Data in Keras
52. Outro